Robert Jervis, in his book Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War, offers a lovely example of the need to look closely for real causal relationships and not rely merely on co-occurrence:
... An understanding of the need for comparisons also reminds us that reports like these are exercises in selecting on the dependent variable. That is, we have post-mortems only after failures, not after successes. Even if we did them well, and even if we found that certain factors were present in all the cases, we would not be on firm ground in making causal inferences — namely, in providing an explanation — unless we could also establish that those factors were absent in cases of intelligence success. Oxygen is not a cause of intelligence failure despite its being present in all such cases. ...
In more mathematically rigorous terms, it's crucial to examine both false positives and false negatives — or, in the confusing language of the Confusion Matrix, the "sensitivity" (aka True Positive Rate = TP/(TP+FN)) and the "specificity" (aka True Negative Rate = TN/(TN+FP)) ...
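To make the contrast concrete, here is a minimal sketch in Python with made-up counts (the numbers are purely illustrative, not from any real post-mortem data): an "oxygen was present" predictor flags every failure, so it scores perfectly on sensitivity, yet it also flags every success, so its specificity is zero and it explains nothing.

```python
def rates(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # True Positive Rate: failures correctly flagged
    specificity = tn / (tn + fp)   # True Negative Rate: successes correctly cleared
    return sensitivity, specificity

# "Oxygen was present" as a predictor of intelligence failure:
# present in all 10 failures (TP=10, FN=0) but also in all 90 successes (FP=90, TN=0).
print(rates(tp=10, fn=0, tn=0, fp=90))   # (1.0, 0.0) -- perfect sensitivity, zero specificity

# A genuinely informative indicator: flags 8 of 10 failures and clears 81 of 90 successes.
print(rates(tp=8, fn=2, tn=81, fp=9))    # (0.8, 0.9)
```

Only by checking both rates — looking at the successes as well as the failures — can one tell a real causal factor apart from background conditions like oxygen.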
(cf. "Reports, Politics, and Intelligence Failures: The Case of Iraq" [1] by Robert Jervis (2006, Journal of Strategic Studies, v.29 n.1), Statistics - A Bayesian Perspective (2010-08-13), Introduction to Bayesian Statistics (2010-11-20), 2013-10-12 - Tesla-Hertz Run - 100 Miler DNF (2013-10-29), Adventure of the Bayesian Clocks - Part One (2013-12-04), ...) - ^z - 2017-06-15